Now that we understand a bit more about the Kubernetes API and the resources exposed by the API, we can move away from kubectl toward using Go.

In this lesson, we will use Go to do many of the same things we did in the previous section using kubectl. We will authenticate using our default context and create a namespace. However, we will not stop there. We will deploy a load-balanced HTTP application to our cluster and watch the logs stream to STDOUT as we make requests to the service.


Click the “Run” button below to start this demo:

The workloads demo project contains the following files: go.mod, go.sum, kind-config.yaml, and main.go.

Enter these commands in the terminal above:

Commands to run in the terminal

Wait a few moments for the last command to finish running. Then, continue with these commands:

The preceding commands create a kind cluster named workloads using a config file that enables host network ingress for the cluster. We will use ingress to expose the service running in the cluster. The commands then deploy the NGINX ingress controller and wait for it to be ready. Finally, we run our Go program to deploy our application. After the service has been deployed and is running, open a browser with the URL beneath the Run button. We should see the following when we browse there:
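The exact commands live in the terminal widget above, but they likely resemble the following sketch. The ingress-nginx manifest URL and label selector are assumptions based on the standard kind plus ingress-nginx setup; your course environment may pin different versions.

```shell
# Create a kind cluster named "workloads" with the provided config file,
# which enables host network ingress.
kind create cluster --name workloads --config kind-config.yaml

# Deploy the NGINX ingress controller built for kind.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# Wait for the ingress controller pod to report ready.
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s

# Run the Go program that deploys our application.
go run .
```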

The deployed NGINX hello world

We should see the request logs stream to STDOUT. They should look like the following:

If we refresh the page, we should see the server name change, indicating that requests are being load-balanced across the two pod replicas in the deployment. Press “Ctrl + C” to terminate the Go program.

To tear down the cluster, run the following command:
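Assuming the cluster was created with the name workloads, the teardown command likely looks like this:

```shell
# Delete the kind cluster and everything running in it.
kind delete cluster --name workloads
```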

The preceding command will delete the kind cluster named workloads. Next, let's explore this Go application to understand what just happened.

The code#

Let's dive right into the code and see what is happening in this Go program:

The func main for our deployment code

In the preceding code, we establish a context derived from the background context. It does little in this scenario, but it would let us cancel an API request that is taking too long. Next, we create clientSet, a strongly typed client for interacting with the Kubernetes API. We then use clientSet in createNamespace, deployNginx, and listenToPodLogs. Finally, we wait for a signal to terminate the program. That's it!
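The actual main.go is in the demo widget, but a minimal sketch of the flow described above might look like this. The helper signatures are assumptions; waitForExitSignal is included since its behavior is simple.

```go
package main

import (
	"context"
	"log"
	"os"
	"os/signal"
)

func main() {
	// Derive a cancellable context from the background context. It does
	// little here, but it would let us cancel long-running API requests.
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Build a strongly typed client for the Kubernetes API.
	clientSet := getClientSet()

	// Create the namespace, deploy the application, and stream pod logs.
	ns, err := createNamespace(ctx, clientSet)
	if err != nil {
		log.Fatal(err)
	}
	if err := deployNginx(ctx, clientSet, ns.Name); err != nil {
		log.Fatal(err)
	}
	listenToPodLogs(ctx, clientSet, ns.Name)

	// Block until the program receives an interrupt signal (Ctrl + C).
	waitForExitSignal()
}

// waitForExitSignal blocks until SIGINT is received.
func waitForExitSignal() {
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, os.Interrupt)
	<-sig
}
```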

Next, let's delve into each function, starting with getClientSet.

Creating a ClientSet#

Let's take a look at getClientSet:

The getClientSet function

In the preceding code, we can see that we build flag bindings to either use the existing ~/.kube/config context or accept a kubeconfig file via an absolute file path. We then build a config from this flag or the default. The config is then used to create a *kubernetes.Clientset. As we learned in the kubectl section, kubeconfig contains all the information we need to connect and authenticate to the server. We now have a client ready to interact with the Kubernetes cluster.
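A sketch of getClientSet, closely following the standard client-go out-of-cluster example; the flag name and panic-style error handling are assumptions.

```go
package main

import (
	"flag"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// getClientSet builds a *kubernetes.Clientset from the default
// ~/.kube/config, or from an absolute path passed via -kubeconfig.
func getClientSet() *kubernetes.Clientset {
	var kubeconfig *string
	if home := homedir.HomeDir(); home != "" {
		kubeconfig = flag.String("kubeconfig", filepath.Join(home, ".kube", "config"),
			"(optional) absolute path to the kubeconfig file")
	} else {
		kubeconfig = flag.String("kubeconfig", "", "absolute path to the kubeconfig file")
	}
	flag.Parse()

	// Build a rest.Config from the kubeconfig file; it carries the server
	// address and the credentials needed to authenticate.
	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}

	// Create the strongly typed client set from the config.
	clientSet, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	return clientSet
}
```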

Next, let's see the ClientSet in action.

Creating a namespace#

Now that we have a ClientSet, we can use it to create the resource we need to deploy our load-balanced HTTP application. Let's take a look at createNamespace:

The function to create a namespace

In the preceding code, we build a corev1.Namespace structure, supplying the name in the ObjectMeta field. If we recall from our YAML example that created a namespace using kubectl, this field maps to metadata.name. The Go structures of the Kubernetes resource map closely to their YAML representations. Finally, we use clientSet to create the namespace via the Kubernetes API server and return the namespace. The metav1.CreateOptions contains some options for changing the behavior of the create operation, but we will not explore this structure in this course.
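A sketch of createNamespace; the namespace name workloads is an assumption.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createNamespace builds a corev1.Namespace and creates it via the API
// server. ObjectMeta.Name corresponds to metadata.name in YAML.
func createNamespace(ctx context.Context, clientSet *kubernetes.Clientset) (*corev1.Namespace, error) {
	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: "workloads"},
	}
	// metav1.CreateOptions is left at its defaults.
	return clientSet.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
}
```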

We have now created the namespace where we will deploy our application. Let's see how we will deploy the application.

Deploying the application into the namespace#

Now that we have clientSet and namespace created, we are ready to deploy the resources that will represent our application. Let's have a look at the deployNginx function:

The function to deploy NGINX application

In the preceding code, we create the NGINX deployment resource and wait for the replicas of the deployment to be ready. After the deployment is ready, the code creates the service resource to load balance across the pods in the deployment. Finally, we create the ingress resource to expose the service on a local host port.
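deployNginx is mostly orchestration. A sketch of the sequence described above; the helper names other than createNginxDeployment (waitForReadyReplicas, createNginxService, createNginxIngress) are hypothetical.

```go
package main

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// deployNginx creates the deployment, waits for its replicas to become
// ready, then creates the service and the ingress that expose it.
func deployNginx(ctx context.Context, clientSet *kubernetes.Clientset, namespace string) error {
	deployment, err := createNginxDeployment(ctx, clientSet, namespace)
	if err != nil {
		return err
	}
	if err := waitForReadyReplicas(ctx, clientSet, deployment); err != nil {
		return err
	}
	if _, err := createNginxService(ctx, clientSet, namespace); err != nil {
		return err
	}
	_, err = createNginxIngress(ctx, clientSet, namespace)
	return err
}
```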

Next, let's review each of these functions to understand what they are doing.

Creating the NGINX deployment#

The first function in deploying our application is createNginxDeployment:

The function to create NGINX deployment

The preceding code initializes matchLabel with a key/value pair that will be used to connect the Deployment with the Service. We also initialize ObjectMeta for the Deployment resource using the namespace and matchLabel. Next, we build a Deployment structure containing a spec with two desired replicas, a LabelSelector using the matchLabel we built earlier, and a pod template that will run a single container with the nginxdemos/hello:latest image exposing port 80 on the container. Finally, we create the deployment specifying the namespace and the Deployment structure we've built.
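A sketch of createNginxDeployment; the resource name nginx-hello and the app: nginx label are assumptions, while the image, port, and replica count come from the lesson.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func int32Ptr(i int32) *int32 { return &i }

// createNginxDeployment creates a two-replica deployment running the
// nginxdemos/hello image, labeled so the service can select its pods.
func createNginxDeployment(ctx context.Context, clientSet *kubernetes.Clientset, namespace string) (*appsv1.Deployment, error) {
	// matchLabel connects the Deployment's pods to the Service's selector.
	matchLabel := map[string]string{"app": "nginx"}
	objMeta := metav1.ObjectMeta{
		Name:      "nginx-hello",
		Namespace: namespace,
		Labels:    matchLabel,
	}
	deployment := &appsv1.Deployment{
		ObjectMeta: objMeta,
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2), // two desired replicas
			Selector: &metav1.LabelSelector{MatchLabels: matchLabel},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: matchLabel},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginxdemos/hello:latest",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
	return clientSet.AppsV1().Deployments(namespace).Create(ctx, deployment, metav1.CreateOptions{})
}
```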

Now that we have created our Deployment, let's see how we wait for the pods in the Deployment to become ready.

Waiting for ready replicas to match desired replicas#

When a Deployment is created, pods for each replica need to be created and start running before they will be able to service requests. Nothing about Kubernetes or the API requests we are authoring requires us to wait for these pods; we do it to provide some user feedback and to illustrate a use for the status portion of the resource. Let's take a look at how we wait for the Deployment state to match the desired state:

Waiting for the replicas to match the desired states

In the preceding code, we loop, checking whether the number of ready replicas matches the desired number, and return when it does. If they do not match, we sleep for a second and try again. This code is not very resilient, but it illustrates the goal-seeking nature of Kubernetes operations.
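A sketch of the polling loop described above (the function name waitForReadyReplicas is an assumption); it compares the desired replica count in the spec with the ready count reported in the status.

```go
package main

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForReadyReplicas polls the deployment until the number of ready
// replicas in its status matches the desired replica count in its spec.
func waitForReadyReplicas(ctx context.Context, clientSet *kubernetes.Clientset, deployment *appsv1.Deployment) error {
	for {
		d, err := clientSet.AppsV1().Deployments(deployment.Namespace).Get(ctx, deployment.Name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Spec.Replicas != nil && *d.Spec.Replicas == d.Status.ReadyReplicas {
			return nil // desired state reached
		}
		// Not ready yet: sleep for a second and try again, unless the
		// context has been cancelled.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
		}
	}
}
```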

Now that we have a running deployment, we can build the Service to load-balance across the pods in the deployment.

Creating a service to load-balance#

The two pod replicas in the deployment are now running the NGINX demo on port 80, but each pod has its own network address. We could address traffic to each one individually, but it is more convenient to target a single address and load-balance the requests across them. Let's create a Service resource to do that:

The function to create a Service resource

In the preceding code, we initialize the same matchLabel and ObjectMeta as we did in the deployment. However, instead of creating a Deployment resource, we create a Service resource structure, specifying the Selector to match on and the port to expose over Transmission Control Protocol (TCP). The Selector label is the key to ensuring that the correct pods are in the backend pool for the load balancer. Finally, we create the Service as we have with the other Kubernetes resources.
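A sketch of the service helper (the function and resource names are assumptions; the selector label matches the deployment sketch above).

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createNginxService creates a service that load-balances TCP port 80
// across all pods carrying the selector label.
func createNginxService(ctx context.Context, clientSet *kubernetes.Clientset, namespace string) (*corev1.Service, error) {
	matchLabel := map[string]string{"app": "nginx"}
	service := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "nginx-hello",
			Namespace: namespace,
			Labels:    matchLabel,
		},
		Spec: corev1.ServiceSpec{
			// Pods matching this label form the backend pool.
			Selector: matchLabel,
			Ports: []corev1.ServicePort{{
				Name:     "http",
				Port:     80,
				Protocol: corev1.ProtocolTCP,
			}},
		},
	}
	return clientSet.CoreV1().Services(namespace).Create(ctx, service, metav1.CreateOptions{})
}
```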

We only have one step left. We need to expose our service via an ingress so that we can send traffic into the cluster via a port on the local machine.

Creating an ingress to expose our application on a local host port#

At this point, we are unable to reach our service from outside the cluster. We could forward traffic into the cluster via kubectl, but we’ll leave that as an exercise. Instead, we are going to create an ingress and open a port on our local host network. Let's see how we create the ingress resource:

The function to create ingress to expose application

In the preceding code, we initialize a prefix, the same objMeta as previously, and ingressPath, which will map the path prefix of /hello to the service name and port name we created. Yes, Kubernetes does the magic of tying the networking together for us! Next, we build the Ingress structure as we saw with the previous structures and create the ingress using clientSet. With this last bit, we deploy our entire application stack using Go and the Kubernetes API.
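A sketch of the ingress helper using the networking.k8s.io/v1 API; the function name and the nginx-hello service/port names are assumptions carried over from the earlier sketches, while the /hello path prefix comes from the lesson.

```go
package main

import (
	"context"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createNginxIngress routes requests with the path prefix /hello to the
// service by name and port name.
func createNginxIngress(ctx context.Context, clientSet *kubernetes.Clientset, namespace string) (*netv1.Ingress, error) {
	prefix := netv1.PathTypePrefix
	ingressPath := netv1.HTTPIngressPath{
		Path:     "/hello",
		PathType: &prefix,
		Backend: netv1.IngressBackend{
			Service: &netv1.IngressServiceBackend{
				Name: "nginx-hello", // the service created earlier
				Port: netv1.ServiceBackendPort{Name: "http"},
			},
		},
	}
	ingress := &netv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-hello", Namespace: namespace},
		Spec: netv1.IngressSpec{
			Rules: []netv1.IngressRule{{
				IngressRuleValue: netv1.IngressRuleValue{
					HTTP: &netv1.HTTPIngressRuleValue{
						Paths: []netv1.HTTPIngressPath{ingressPath},
					},
				},
			}},
		},
	}
	return clientSet.NetworkingV1().Ingresses(namespace).Create(ctx, ingress, metav1.CreateOptions{})
}
```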

Next, let's return to main.go and look at how we can use Kubernetes to stream the logs of the pods to show the incoming HTTP requests while the program is running.

Streaming pod logs for the NGINX application#

The Kubernetes API exposes a bunch of great features for running workloads. One of the most basic and useful is the ability to access logs for running pods. Let's see how we can stream logs from multiple running pods to STDOUT:

The function to stream logs from multiple running pods

In the preceding code, listenToPodLogs lists the pods in the given namespace and then starts a goroutine for each one. In each goroutine, we use the Kubernetes API to request a stream of pod logs, which returns an io.ReadCloser that delivers log lines from the pod as they are created. We then copy that stream to STDOUT, and the logs appear in our terminal.
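A sketch of listenToPodLogs using the client-go log-streaming API; the panic-style error handling is an assumption.

```go
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listenToPodLogs lists the pods in the namespace and starts a goroutine
// per pod that follows its log stream and copies it to STDOUT.
func listenToPodLogs(ctx context.Context, clientSet *kubernetes.Clientset, namespace string) {
	pods, err := clientSet.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		podName := pod.Name
		go func() {
			// Follow: true keeps the stream open, delivering log lines
			// as the pod writes them.
			req := clientSet.CoreV1().Pods(namespace).GetLogs(podName, &corev1.PodLogOptions{Follow: true})
			stream, err := req.Stream(ctx)
			if err != nil {
				panic(err)
			}
			defer stream.Close()
			_, _ = io.Copy(os.Stdout, stream)
		}()
	}
}
```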

If we thought that getting logs from our running workloads was going to be a lot tougher than this, we wouldn't be alone. Kubernetes is quite complex, but the concept that everything is exposed as an API makes the platform incredibly flexible and programmable.

We have explored every function except waitForExitSignal, which is relatively trivial and doesn't add anything to the Kubernetes story told here.

Having explored this example of using the Kubernetes API to programmatically deploy an application with Go, we hope the experience leaves us feeling empowered to learn, build, and interact with the Kubernetes API with relative comfort. There is so much more to the Kubernetes API, and it's ever-growing.
